Sketching is a natural and effective visual communication medium commonly used in creative processes. Recent developments in deep-learning models have drastically improved machines' ability to understand and generate visual content. An exciting area of development explores deep-learning approaches for modeling human sketches, opening opportunities for creative applications. This chapter describes three fundamental steps in developing deep-learning-driven creativity support tools that consume and generate sketches: 1) a data collection effort that generated a new paired dataset between sketches and mobile user interfaces; 2) a sketch-based user interface retrieval system adapted from state-of-the-art computer vision techniques; and 3) a conversational sketching system that supports the novel interaction of a natural-language-based sketch/critique authoring process. In this chapter, we survey relevant prior work in both the deep-learning and human-computer-interaction communities, document in detail the data collection process and the architecture of each system, present qualitative and quantitative results, and paint the landscape of several future research directions in this exciting area.
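The core of such a sketch-based retrieval system is typically a shared embedding space in which a sketch and the UI screenshots it resembles land close together. The following minimal sketch illustrates only the retrieval step, under the assumption that two encoders (not shown, and not the chapter's actual implementation) have already mapped sketches and screenshots into that space; all names are illustrative.

```python
import numpy as np

def retrieve_uis(sketch_emb, ui_embs, k=5):
    """Return indices of the k UI screens closest to a sketch embedding.

    Assumes sketch_emb (d,) and ui_embs (n, d) come from two encoders
    trained to map sketches and UI screenshots into a shared embedding
    space (e.g., with a triplet or contrastive loss).
    """
    # Cosine similarity between the sketch and every UI screenshot.
    s = sketch_emb / np.linalg.norm(sketch_emb)
    u = ui_embs / np.linalg.norm(ui_embs, axis=1, keepdims=True)
    sims = u @ s
    return np.argsort(-sims)[:k]
```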
Summary quality assessment metrics fall into two categories: reference-based and reference-free. Reference-based metrics are theoretically more accurate but are limited by the availability and quality of human-written references, both of which are difficult to ensure. This has inspired the development, in the past few years, of reference-free metrics, which are independent of human-written references. However, existing reference-free metrics cannot be both zero-shot and accurate. In this paper, we propose a zero-shot but accurate reference-free approach in a sneaky way: feeding the documents, based upon which the summaries were generated, as references into reference-based metrics. Experimental results show that this zero-shot approach yields the best-performing reference-free metrics on nearly all aspects of several recently released datasets, sometimes even beating reference-free metrics specifically trained for this task. We further investigate which reference-based metrics can benefit from such repurposing and whether our additional tweaks help.
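The repurposing idea is simple enough to sketch directly: hand an off-the-shelf reference-based metric the source document where it expects a human reference. Below is a minimal illustration using ROUGE via the `rouge-score` package; the choice of ROUGE is ours for illustration, not necessarily the metric the paper evaluates.

```python
# A minimal sketch of the repurposing idea: score a summary against its
# source document (instead of a human reference) with an off-the-shelf
# reference-based metric.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)

def reference_free_score(document: str, summary: str) -> dict:
    # The document plays the role normally taken by a human-written reference.
    return scorer.score(target=document, prediction=summary)
```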
Question-and-answer formats provide a novel experimental platform for investigating cybersecurity questions. Unlike previous chatbots, the latest ChatGPT model from OpenAI supports an advanced understanding of complex coding questions. The research demonstrates thirteen coding tasks that generally qualify as stages in the MITRE ATT&CK framework, ranging from credential access to defense evasion. With varying success, the experimental prompts generate examples of keyloggers, logic bombs, obfuscated worms, and payment-fulfilled ransomware. The empirical results illustrate cases that support broad gains in functionality, including self-replication, self-modification, evasion, and strategic understanding of complex cybersecurity goals. One surprising feature of ChatGPT as a language-only model is its ability to spawn coding approaches that yield images which obfuscate or embed executable programming steps or links.
We investigate methods for incorporating the uncertainty of data-driven object detectors into object tracking algorithms. Object tracking methods rely on a measurement error model, typically in the form of measurement noise, false positive rates, and missed detection rates, and these quantities may in general depend on the object or measurement location. However, for detections generated from neural-network-processed camera inputs, such measurement error statistics do not adequately represent the primary source of error: the dissimilarity between the run-time sensor inputs and the training data on which the detector was trained. To this end, we investigate incorporating this data uncertainty into object tracking methods, for example to improve the ability to track objects, particularly those that fall outside the training data. The proposed methods are validated on an object tracking benchmark as well as in experiments with a real autonomous aircraft.
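One common way to fold a per-detection uncertainty estimate into a tracker is to inflate the measurement noise of a Kalman filter, so that low-confidence (e.g., out-of-distribution-looking) detections pull the state estimate less. The sketch below illustrates that general recipe only; it is not the paper's method, and all names are hypothetical.

```python
import numpy as np

def kf_update(x, P, z, H, R_base, detector_uncertainty):
    """One Kalman measurement update with detection-dependent noise.

    Illustrative only: the detector's data uncertainty scales the
    measurement covariance R, down-weighting dubious detections.
    """
    R = R_base * (1.0 + detector_uncertainty)  # inflate noise for uncertain detections
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```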
Simple dynamical models can produce complex behaviors in large networks. These behaviors can often be observed in a wide variety of physical systems captured by network models. Here, we describe a phenomenon in which a force field spontaneously emerges from a dynamical instability. This can be understood as an unstable ("rumbling") tunneling mechanism between minima of an effective potential. We term this collective and nonperturbative effect a "Lyapunov force", which steers the system toward the global minimum of the potential function, even though the full system has a constellation of equilibrium points that grows exponentially with system size. The system we study has a simple mapping to a flow network equivalent to current-driven memristors. The mechanism is appealing for its physical relevance in nanoscale physics, as well as for possible applications in optimization, novel Monte Carlo schemes, and machine learning.
Reservoir computing is a machine learning paradigm that uses a high-dimensional dynamical system, or \emph{reservoir}, to approximate and predict time series data. The scale, speed, and power usage of reservoir computers could be enhanced by constructing reservoirs out of electronic circuits, and several experimental studies have demonstrated promise in this direction. However, designing quality reservoirs requires a precise understanding of how such circuits process and store information. We analyze the feasibility and optimal design of electronic reservoirs that include both linear elements (resistors, inductors, and capacitors) and nonlinear memory elements called memristors. We provide analytic results regarding the feasibility of these reservoirs and give a systematic characterization of their computational properties by examining the types of input-output relationships that they can approximate. This allows us to design reservoirs with optimal properties. By introducing measures of the total linear and nonlinear computational capacity of a reservoir, we are able to design electronic circuits whose total computational capacity scales extensively with system size. Our electronic reservoirs can match or exceed the performance of conventional "echo state network" reservoirs in a form that may be directly implemented in hardware.
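For readers unfamiliar with the "echo state network" baseline mentioned above, the following is a minimal software sketch of the paradigm: a fixed random recurrent reservoir whose states are read out by a linear map trained with ridge regression. In an electronic or memristive reservoir, the recurrence below is replaced by circuit dynamics; sizes and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def esn_fit(inputs, targets, n_res=200, rho=0.9, ridge=1e-6):
    """Minimal echo state network: fixed random reservoir, ridge readout."""
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))   # set spectral radius below 1
    states = np.zeros((len(inputs), n_res))
    x = np.zeros(n_res)
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)           # reservoir dynamics
        states[t] = x
    # Linear readout trained by ridge regression.
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                            states.T @ targets)
    return W_out, states
```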
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though instantiations share the same framework. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.
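To make the Meta-Mobile Block abstraction concrete, here is a schematic PyTorch instantiation of the shared framework (expand, mix tokens, project, residual). Swapping the mixer between a depthwise convolution (CNN-like, local) and self-attention (Transformer-like, global) yields different instantiations; this sketch uses a depthwise convolution and is not the authors' exact iRMB.

```python
import torch
import torch.nn as nn

class MetaMobileBlock(nn.Module):
    """Schematic Meta-Mobile Block: expand -> token mixer -> project + residual."""
    def __init__(self, dim, expand=4):
        super().__init__()
        hidden = dim * expand
        self.expand = nn.Sequential(nn.Conv2d(dim, hidden, 1), nn.GELU())
        # Token mixer: depthwise conv here; attention would be the ViT-like choice.
        self.mixer = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.project = nn.Conv2d(hidden, dim, 1)

    def forward(self, x):
        return x + self.project(self.mixer(self.expand(x)))
```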
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the resulting question-answer pairs as training data for a state-of-the-art QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed from subjects (or objects) and predicates while the corresponding objects (or subjects) are taken as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves on-par performance with existing state-of-the-art QA systems while being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
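The triple-to-question recipe can be illustrated in a few lines: ask about one argument of the triple and treat the other as the answer. The toy templates below are hypothetical and far simpler than PIE-QG's actual question formation, which would handle tense, wh-word choice, and paraphrasing.

```python
def triples_to_qa(triples):
    """Turn OpenIE <subject, predicate, object> triples into QA pairs.

    Naive templates: a real system would conjugate the predicate and
    pick who/what/where appropriately.
    """
    pairs = []
    for subj, pred, obj in triples:
        pairs.append((f"What {pred} {obj}?", subj))        # answer: subject
        pairs.append((f"What did {subj} {pred}?", obj))    # answer: object
    return pairs

print(triples_to_qa([("Marie Curie", "discover", "polonium")]))
```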
Transformers have achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such a dataset is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer on limited medical data, we propose an auxiliary difficulty ranking task: the Transformer must identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer is encouraged to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of the self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
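The online/target recipe BOLT builds on is the BYOL-style bootstrap: the target branch is an exponential moving average of the online branch, and the online branch is trained to predict the target's representation. The sketch below shows only that generic skeleton, not BOLT's full objective (which adds the auxiliary difficulty-ranking task); all names are illustrative.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(target_net, online_net, tau=0.99):
    # Target branch tracks the online branch via an exponential moving average.
    for pt, po in zip(target_net.parameters(), online_net.parameters()):
        pt.mul_(tau).add_(po, alpha=1 - tau)

def bolt_style_loss(online_pred, target_repr):
    """BYOL-style consistency: the online branch predicts the target
    branch's representation of differently perturbed tokens."""
    return 2 - 2 * F.cosine_similarity(online_pred, target_repr.detach(), dim=-1).mean()
```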
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult for KGEs to infer inductively. To address this challenge, we resort to analogical inference and propose a novel and general self-supervised framework, AnKGE, to enhance KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. In AnKGE, we train an analogy function for each level of analogical inference, taking the original element embedding from a well-trained KGE model as input and outputting the analogical object embedding. To combine the inductive inference capability of the original KGE model with the analogical inference capability added by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights into the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
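A rough sketch of the scoring step described above, under our own simplifying assumptions: the three levels' analogy scores are combined with stand-in weights and then interpolated with the base KGE score. The exact form of the adaptive weighting in the paper may differ.

```python
def ankge_style_score(base_score, level_scores, level_w, lam):
    """Sketch: weighted sum of entity-, relation-, and triple-level
    analogy scores, interpolated with the base KGE model score.
    level_w and lam stand in for the paper's adaptive weights."""
    analogy = sum(w * s for w, s in zip(level_w, level_scores))
    return (1 - lam) * base_score + lam * analogy
```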